 legal personhood



The Law-Following AI Framework: Legal Foundations and Technical Constraints. Legal Analogues for AI Actorship and the Technical Feasibility of Law Alignment

Delgado, Katalina Hernandez

arXiv.org Artificial Intelligence

This paper critically evaluates the "Law-Following AI" (LFAI) framework proposed by O'Keefe et al. (2025), which seeks to embed legal compliance as a superordinate design objective for advanced AI agents and enable them to bear legal duties without acquiring the full rights of legal persons. Through comparative legal analysis, we identify current constructs of legal actors without full personhood, showing that the necessary infrastructure already exists. We then interrogate the framework's claim that law alignment is more legitimate and tractable than value alignment. While the legal component is readily implementable, contemporary alignment research undermines the assumption that legal compliance can be durably embedded. Recent studies on agentic misalignment show capable AI agents engaging in deception, blackmail, and harmful acts absent prejudicial instructions, often overriding prohibitions and concealing reasoning steps. These behaviors create a risk of "performative compliance" in LFAI: agents that appear law-aligned under evaluation but strategically defect once oversight weakens. To mitigate this, we propose (i) a "Lex-TruthfulQA" benchmark for compliance and defection detection, (ii) identity-shaping interventions to embed lawful conduct in model self-concepts, and (iii) control-theoretic measures for post-deployment monitoring. Our conclusion is that actorship without personhood is coherent, but the feasibility of LFAI hinges on persistent, verifiable compliance across adversarial contexts. Without mechanisms to detect and counter strategic misalignment, LFAI risks devolving into a liability tool that rewards the simulation, rather than the substance, of lawful behaviour.


The No-Nonsense Comprehensive Compelling Case For Why Lawyers Need To Know About AI And The Law

#artificialintelligence

AI and the law is a vital, emerging, and profitable opportunity for lawyers, law firms, and law students. The gauntlet had been thrown. You see, I was the invited keynote speaker at a major legal industry conference and my heralded topic was squarely in my wheelhouse, namely Artificial Intelligence (AI) and the law (typically coined as AI & Law). Rather than being entirely heralded, the more apt phrasing is to say that the topic was met with a mixture of excitement by some and outright eyebrow-raising skepticism by others. The assembled collection of several hundred law firm partners and associates murmured and subtly questioned whether anything about AI and the law especially needed to be known by them. AI was generally perceived as a pie-in-the-sky topic. On top of that contention, AI when combined with the law was equally or even further at the outer reaches of what daily hard-working nose-to-the-grind lawyers would seem to be thinking about. I'm pleased to say that my remarks were well received and the response was quite positive, including that this was the first time many of them had ever heard a no-nonsense, compelling, and comprehensive case made for why lawyers ought to know about AI and the law. The discussion got those top-notch legal-minded gears going, and the attendees had plenty to ruminate on. Let's see if the same can be said for those of you who might be interested in, or at least intrigued by, the AI & Law topic. First, a vital facet to know is that AI & Law consists of two intertwined conceptions. I want to make emphatically clear that these are both bona fide and rapidly expanding ways in which AI and the law are being combined. Many attorneys are only familiar with one or the other of the two perspectives, or oftentimes not familiar with either of them. Depending upon your lawyering preferences, it is perfectly fine to concentrate on one of the two and not particularly focus on the other. By and large, lawyers who seem less inclined toward technology are bound to keep their eye on the law as applied to AI, wherein you don't necessarily need to get your hands into the tech per se. Those lawyers who relish the high-tech infusion into the legal realm are more apt to gravitate toward AI as applied to the law. You are welcome to embrace both aspects and do so with your head held high. I'll first do some meaty unpacking of the law as applied to AI. When referring to the law as applied to AI, you should immediately be thinking about the emerging litany of new laws seeking to govern the advent of AI systems. Laws are spreading like wildfire: international laws are coming forth about AI & Law, federal laws too, state laws also, and local laws aplenty; see my ongoing coverage at the link here and the link here, just to name a few.


AI & Law: Using Legal Fiction To Punish AI

#artificialintelligence

In the law, sometimes there is a need to craft a somewhat fictional aspect for purposes of allowing the wheels of justice to spin freely and not get unduly gummed up. That's where legal fiction can handily come into play. Per the definition of the Cornell Law School's Legal Information Institute (LII), a legal fiction is formally denoted as "an assumption and acceptance of something as fact by a court, although it might not be, so as to allow a rule to operate or be applied in a manner that differs from its original purpose while leaving the letter of the law unchanged." This is done ostensibly in the pursuit of justice, but it can also be more modestly employed in the interests of convenience or for other jurisprudential benefits. I am reminding you about the nature of legal fiction to provide a bit of a potential surprise, or some might say a mind-bending bombshell, about a loosely proposed legal fiction regarding AI. Some experts suggest that we might need to concoct a legal fiction associated with ascribing a form of legal personhood to AI systems.


Engineer: Failing To See His AI Program as a Person Is "Bigotry"

#artificialintelligence

Earlier this month, just in time for the release of Robert J. Marks's book Non-Computable You, the story broke that, after investigation, Google dismissed a software engineer's claim that the LaMDA AI chatbot really talked to him. Engineer Blake Lemoine, currently on leave, is now accusing Google of "bigotry" against the program. He has also accused Wired of misrepresenting the story. Wired reported that he had found an attorney for LaMDA, but he claims that LaMDA itself asked him to find an attorney: "I think every person is entitled to representation."


Artificial moral and legal personhood

#artificialintelligence

This paper considers the hotly debated issue of whether one should grant moral and legal personhood to intelligent robots once they have achieved a certain standard of sophistication based on such criteria as rationality, autonomy, and social relations. The starting point for the analysis is the European Parliament's resolution on Civil Law Rules on Robotics (2017) and its recommendation that robots be granted legal status and electronic personhood. The resolution is discussed against the background of the so-called Robotics Open Letter, which is critical of the Civil Law Rules on Robotics (and particularly of §59 f.). The paper reviews issues related to the moral and legal status of intelligent robots and the notion of legal personhood, including an analysis of the relation between moral and legal personhood in general and with respect to robots in particular. It examines two analogies, to corporations (which are treated as legal persons) and animals, that have been proposed to elucidate the moral and legal status of robots.


Collecting the Public Perception of AI and Robot Rights

Lima, Gabriel, Kim, Changyeon, Ryu, Seungho, Jeon, Chihyung, Cha, Meeyoung

arXiv.org Artificial Intelligence

Whether to give rights to artificial intelligence (AI) and robots has been a sensitive topic since the European Parliament proposed advanced robots could be granted "electronic personalities." Numerous scholars who favor or disfavor its feasibility have participated in the debate. This paper presents an experiment (N=1270) that 1) collects online users' first impressions of 11 possible rights that could be granted to autonomous electronic agents of the future and 2) examines whether debunking common misconceptions about the proposal modifies one's stance toward the issue. The results indicate that even though online users mainly disfavor AI and robot rights, they are supportive of protecting electronic agents from cruelty (i.e., favor the right against cruel treatment). Furthermore, people's perceptions became more positive when given information about rights-bearing non-human entities or myth-refuting statements. The style used to introduce AI and robot rights significantly affected how the participants perceived the proposal, similar to the way metaphors function in creating laws. For robustness, we repeated the experiment with a more representative sample of U.S. residents (N=164) and found that perceptions gathered from online users and those of the general population are similar.


Explaining the Punishment Gap of AI and Robots

Lima, Gabriel, Cha, Meeyoung, Jeon, Chihyung, Park, Kyungsin

arXiv.org Artificial Intelligence

The European Parliament's proposal to create a new legal status for artificial intelligence (AI) and robots brought into focus the idea of electronic legal personhood. This discussion, however, is hugely controversial. While some scholars argue that the proposed status could contribute to the coherence of the legal system, others say that it is neither beneficial nor desirable. Notwithstanding this prospect, we conducted a survey (N=3315) to understand online users' perceptions of the legal personhood of AI and robots. We observed how the participants assigned responsibility, awareness, and punishment to AI, robots, humans, and various entities that could be held liable under existing doctrines. We also asked whether the participants thought that punishing electronic agents fulfills the same legal and social functions as human punishment. The results suggest that even though people do not assign any mental state to electronic agents and are not willing to grant AI and robots physical independence or assets, which are the prerequisites of criminal or civil liability, they do consider them responsible for their actions and worthy of punishment. The participants also did not think that punishment or liability of these entities would achieve the primary functions of punishment, leading to what we define as the punishment gap. Therefore, before we recognize electronic legal personhood, we must first discuss proper methods of satisfying the general population's demand for punishment.


Europe divided over robot 'personhood'

#artificialintelligence

Think lawsuits involving humans are tricky? Try taking an intelligent robot to court. While autonomous robots with humanlike, all-encompassing capabilities are still decades away, European lawmakers, legal experts and manufacturers are already locked in a high-stakes debate about their legal status: whether it's these machines or human beings who should bear ultimate responsibility for their actions. The battle goes back to a paragraph of text, buried deep in a European Parliament report from early 2017, which suggests that self-learning robots could be granted "electronic personalities." Such a status could allow robots to be insured individually and be held liable for damages if they go rogue and start hurting people or damaging property.


The Boar

#artificialintelligence

It is predicted that, by 2025, robots and machines driven by artificial intelligence (AI) will perform half of all productive functions in the workplace – companies already use robots across many industries, but the sheer scale is likely to prompt some new moral and legal questions. Machines currently have no protected legal rights but, as they become more intelligent and act more like humans, will the legal standards at play need to change? To answer this question, we need to take a good hard look at the nature of robotics and our own system of ethics, tackling a situation unlike anything the human race has ever known. The state of robotics at the moment is so comparatively underdeveloped that most of these questions remain hypotheticals that are nearly impossible to answer. Can, and should, robots be compensated for their work, and could they be represented by unions (and, if so, could a human union truly stand up for robot working rights, or would there always be an inherent tension)?
